Facial Recognition AI
China Boasts of 'Mind-reading' Artificial Intelligence that Supports 'AI-tocracy'
An artificial intelligence (AI) institute in Hefei, in China's Anhui province, says it has developed software that can gauge the loyalty of Communist Party members – a claim that, if true, would mark a breakthrough, but one that has sparked public outcry. Analysts said China has improved its AI-powered surveillance, using big data, machine learning, facial recognition and AI to "get into the brains and minds of its people," building what many call a draconian digital dictatorship. On July 1, the institute posted a video called "The Smart Political Education Bar" to boast about its "mind-reading" software, which it said would be used on party members to "further solidify their determination to be grateful to the party, listen to the party and follow the party." In the video, a subject was seen scrolling through online material promoting party policy at a kiosk, where the institute said its AI software was monitoring his reaction to gauge how attentive he was to the party's thought education. The post, however, was taken down shortly after sparking a public outcry among Chinese netizens.
Microsoft will phase out facial recognition AI that could detect emotions
Microsoft is keenly aware of the mounting backlash toward facial recognition, and it's shuttering a significant project in response. The company has revealed it will "retire" facial recognition technology that it said could infer emotions as well as characteristics like age, gender and hair. Microsoft said the AI raised privacy questions, and that offering such a framework created the potential for discrimination and other abuses. There was also no clear consensus on the definition of emotions, and no way to create a generalized link between facial expressions and emotions. New users of Microsoft's Face programming framework no longer have access to these attribute detection features.
What Is AI Bias and How Can Developers Avoid It?
Artificial intelligence capabilities are expanding exponentially, with AI now being utilized in industries from advertising to medical research. The use of AI in more sensitive areas such as facial recognition software, hiring algorithms, and healthcare provision has precipitated debate about bias and fairness. Bias is a well-researched facet of human psychology. Research regularly exposes our unconscious preferences and prejudices, and now we see AI reflect some of these biases in its algorithms. So, how does artificial intelligence become biased? And why does this matter?
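One common answer to "how does AI become biased?" is that a model faithfully learns patterns from historically skewed data. The minimal sketch below (an illustration of that mechanism, not anything from the article; the hiring scenario and all names are hypothetical) trains a trivial "model" that is nothing more than each group's historical hire rate, then shows that identically qualified applicants from different groups receive different predictions:

```python
from collections import defaultdict

def train(history):
    """history: list of (group, hired) pairs from past decisions."""
    hires = defaultdict(int)
    totals = defaultdict(int)
    for group, hired in history:
        totals[group] += 1
        hires[group] += hired
    # The "model" is just each group's historical hire rate.
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Recommend an applicant if their group's historical rate clears the bar."""
    return model[group] >= threshold

# Past decisions favored group A; the data, not the code, encodes the bias.
history = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 3 + [("B", 0)] * 7

model = train(history)
print(model)                                   # {'A': 0.8, 'B': 0.3}
print(predict(model, "A"), predict(model, "B"))  # True False
```

Nothing in the code mentions merit or qualifications; the disparity comes entirely from the training history, which is why debiasing efforts usually start with auditing the data rather than the algorithm.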
Tech Giants Back Off Selling Facial Recognition AI to Police - InformationWeek
Artificial intelligence technologies offer a lot of potential to improve the world. Simulations could speed up disease and drug research, autonomous vehicles could cut energy use and its impact on the environment, and facial recognition could help quickly identify missing children. But there's a flip side to the good, and some major technology companies acknowledged the potential issues with facial recognition software last week, with IBM halting development while Amazon and Microsoft pledged to not sell the technology to the police for a set period of time. The moves come in the wake of incidents of police violence at widespread protests across the country in response to the death of George Floyd at the hands of police in Minneapolis in May. Privacy advocates have opposed the use of facial recognition software for years, saying it could be abused by the government to surveil and harass citizens.
The benefits of facial recognition AI are being wildly overstated
Facial recognition technology has run amok across the globe. In the US it continues to proliferate at an alarming rate despite bipartisan push-back from politicians and several geographical bans. Even China's government has begun to question whether there's enough benefit to the use of ubiquitous surveillance tech to justify the utter destruction of public privacy. The truth of the matter is that facial recognition technology serves only two legitimate purposes: access control and surveillance. And, far too often, the people developing the technology aren't the ones who ultimately determine how it's used. Most decent, law-abiding citizens don't mind being filmed in public and, to a certain degree, would take no exception to the use of facial recognition technology in places where it makes sense.
Facial recognition AI can't identify trans and non-binary people
Facial-recognition software from major tech companies is apparently ill-equipped to work on transgender and non-binary people, according to new research. A recent study by computer-science researchers at the University of Colorado Boulder found that major AI-based facial analysis tools--including Amazon's Rekognition, IBM's Watson, Microsoft's Azure, and Clarifai--habitually misidentified non-cisgender people. The researchers collected photos posted to Instagram under gender-related hashtags, eliminating instances in which multiple individuals were in the photo, or where at least 75% of the person's face wasn't visible. The remaining images were then divided by hashtag, amounting to 350 images in each group. The scientists then tested each group against the facial analysis tools of the four companies.
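The evaluation protocol described above can be sketched as a simple loop: group labeled images by the hashtag they were collected under, classify each one, and compare per-group accuracy. This is an illustrative sketch, not the study's actual code; `classify_gender` is a hypothetical stand-in for a real facial-analysis service, stubbed here to always answer "female" to show how a service restricted to binary labels produces zero accuracy on a non-binary group:

```python
from collections import defaultdict

def classify_gender(image):
    # Stub: a real implementation would call a facial-analysis API.
    # Services limited to binary labels can never answer "nonbinary".
    return "female"

def accuracy_by_group(images):
    """images: list of (hashtag, image, true_gender) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for hashtag, image, true_gender in images:
        total[hashtag] += 1
        if classify_gender(image) == true_gender:
            correct[hashtag] += 1
    return {tag: correct[tag] / total[tag] for tag in total}

sample = [
    ("#woman", "img1", "female"),
    ("#woman", "img2", "female"),
    ("#transwoman", "img3", "female"),
    ("#nonbinary", "img4", "nonbinary"),  # no binary label can be correct
]
print(accuracy_by_group(sample))
```

Comparing accuracy per hashtag group, rather than in aggregate, is what lets this kind of audit surface disparities that an overall accuracy number would hide.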
Microsoft seeks to restrict abuse of its facial recognition AI
Microsoft is planning to implement self-designed ethical principles for its facial recognition technology by the end of March, as it urges governments to push ahead with matching regulation in the field. The company in December called for new legislation to govern artificial intelligence software for recognising faces, advocating for human review and oversight of the technology in some critical cases, as a way to mitigate the risks of biased outcomes and intrusions into privacy and democratic freedoms. "We do need to lead by example and we're working to do that," Microsoft President and Chief Legal Officer Brad Smith said in an interview, adding that some other companies are also putting similar principles into place. Smith said the company plans by the end of March to "operationalise" its principles, which involves drafting policies, building governance systems, engineering tools, and testing to make sure the technology's use is in line with those goals. It also involves setting controls for the company's global sales and consulting teams to prevent selling the technology in cases where it risks being used for an unwanted purpose.
Microsoft calls for laws to prevent bias in facial recognition AI
Microsoft Corp. called for new legislation to govern artificial intelligence software for recognizing faces, advocating for human review and oversight of the technology in critical cases. "This includes where decisions may create a risk of bodily or emotional harm to a consumer, where there may be implications on human or fundamental rights, or where a consumer's personal freedom or privacy may be impinged," Microsoft President and Chief Legal Officer Brad Smith wrote in a blog post published in conjunction with a speech on the topic at the Brookings Institution think tank. Sellers of the technology must "recognize that they are not absolved of their obligation to comply with laws prohibiting discrimination against individual consumers or groups of consumers," he added. Smith also wants laws to require sellers of the products to explain clearly what they do and to open up their services to testing by outside parties for accuracy and bias. Earlier Thursday, advocacy group AI Now called for greater regulation and regular audits of AI tools used by governments.
Astronomers train Facebook's facial recognition AI to spot 'burping' black holes in deep space
Astronomers have trained Facebook's facial recognition software to spot 'burping' black holes in deep space. The artificial intelligence (AI) tool is programmed to pick radio galaxies out from scans taken by radio telescopes. These rare galaxies spew powerful radio jets from the supermassive black holes at their centres, and scientists believe they hold clues to the structure of the universe. Using the new programme, dubbed ClaRAN, experts at the University of Western Australia hope to make it easier to spot the elusive galaxies - using the radio signals fired from their black holes.